
Plant Phenomics

Elsevier BV

All preprints, ranked by how well they match Plant Phenomics's content profile, based on 17 papers previously published here. The average preprint has a 0.02% match score for this journal, so anything above that is already an above-average fit. Older preprints may already have been published elsewhere.

1
Transformer-Based Phenotyping of Rice Root Aerenchyma Across Environments Enables Climate-Smart Rice Selection

Atef, H.; Fierro-Dominguez, L.; Lozano-Montana, P.; Navarro-Sanz, S.; Bals, J.; Clerget, B.; Perin, C.; Maria Camila, R.; Fernandez, R.

2026-02-03 physiology 10.64898/2026.01.30.702889 medRxiv
Top 0.1%
71.7%

Quantification of root anatomical traits such as cortical aerenchyma is key to understanding rice adaptation to diverse water regimes. Recently, the role of aerenchyma in regulating methane emissions has been demonstrated, making it a target for climate change mitigation. Despite its importance, breeding for root anatomical traits remains limited because manual analysis of root cross-sections is labor-intensive, inconsistent, and poorly scalable, and analysis pipelines do not generalize across heterogeneous imaging conditions. We present a deep learning pipeline based on a recent vision transformer architecture to automatically segment rice root anatomical structures and quantify aerenchyma. The model was trained on a multi-environment dataset of 1,760 annotated rice root cross-sections acquired across growth stages, cultivation systems, and countries, using a collaboratively defined annotation protocol. The model achieved high segmentation performance (mean Intersection-over-Union > 0.92) and near-perfect aerenchyma ratio quantification (R² = 0.98), and was evaluated by two experts as performing on par with, and in some cases better than, expert annotators. Delivered as open-source software with an online interactive demonstrator, the pipeline revealed differences in aerenchyma across genotypes, water regimes, environments, and developmental stages. Overall, this work demonstrates that transformer-based segmentation enables high-throughput anatomical phenotyping, supporting scalable and climate-smart rice breeding.

Highlights:
- Transformer-based segmentation enables robust aerenchyma phenotyping across environments
- A SegFormer model achieves expert-level accuracy on diverse rice root cross-sections
- Automated analysis delivers near-perfect lacuna-to-cortex ratio quantification (R² ≈ 0.98)
- Our online demonstrator supports scalable, climate-smart rice breeding applications
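The lacuna-to-cortex (aerenchyma) ratio this pipeline reports reduces to a pixel-count ratio over the segmentation mask. A minimal sketch in Python/NumPy, assuming hypothetical integer class labels; the published pipeline's actual class encoding is not given in the abstract:

```python
import numpy as np

# Hypothetical label values for the segmentation mask; the published
# pipeline's actual class encoding may differ.
CORTEX, LACUNA = 1, 2

def aerenchyma_ratio(mask: np.ndarray) -> float:
    """Lacuna-to-cortex ratio: aerenchyma (lacuna) pixels divided by the
    total cortex region (cortex tissue plus lacunae)."""
    lacuna = np.count_nonzero(mask == LACUNA)
    cortex = np.count_nonzero(mask == CORTEX)
    total = lacuna + cortex
    return lacuna / total if total else 0.0

# Toy 4x4 mask: 4 lacuna pixels in a 16-pixel cortex region -> ratio 0.25
toy = np.array([
    [1, 1, 1, 1],
    [1, 2, 2, 1],
    [1, 2, 2, 1],
    [1, 1, 1, 1],
])
```

Per-genotype comparisons across water regimes then become simple aggregations of this ratio over many cross-sections.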

2
FieldDino: High-throughput physio-morphological phenotyping of stomatal characteristics for plant breeding research

Chaplin, E. D.; Coleman, G. R.; Merchant, A.; Salter, W. T.

2024-10-14 physiology 10.1101/2024.10.08.617327 medRxiv
Top 0.1%
61.2%

Stomatal anatomy and physiology define CO2 availability for photosynthesis and regulate plant water use. Although stomatal traits are key drivers of yield and of dynamic responsiveness to abiotic stresses, conventional measurement techniques are laborious and slow, limiting adoption in plant breeding. Advances in instrumentation and data analyses present an opportunity to screen stomatal traits at scales relevant to plant breeding. We present a high-throughput field-based phenotyping approach, FieldDino, for screening of stomatal physiology and anatomy. The method allows coupled measurements to be collected in <15 s and consists of: (1) stomatal conductance measurements using a handheld porometer; (2) in situ collection of epidermal images with a digital microscope, 3D-printed leaf clip and Python-based app; and (3) automated deep learning analysis of stomatal features. The YOLOv8-M model trained on images collected in the field achieved strong performance metrics with an mAP@0.5 of 97.1% for stomatal detection. Validation in large field trials of 200 wheat genotypes with two irrigation treatments captured wide diversity in stomatal traits. FieldDino enables stomatal data collection and analysis at unprecedented scales in the field. This will advance research on stomatal biology and accelerate the incorporation of stomatal traits into plant breeding programs for resilience to abiotic stress.

Highlight: Chaplin et al. have developed FieldDino, which enables rapid, high-throughput phenotyping of stomatal traits, advancing plant breeding research by integrating streamlined in-field measurements with automated deep learning analysis.

3
MultiSpecies Canopy Segmentation: Interactive Machine-Learning and Pseudo-Labelling are key

Rongione, C.; Smith, A. G.; Draye, X.; De Vleeschouwer, C.; Chevalier, C.; Lobet, G.

2025-12-10 bioengineering 10.64898/2025.12.07.692840 medRxiv
Top 0.1%
51.3%

This study investigates the challenge of creating datasets for training multiclass deep-learning segmentation models, specifically for segmenting multi-species canopy images. Creating training sets for deep-learning-based segmentation of multi-species canopies is currently too labor-intensive and time-consuming to be viable. To address this challenge, we propose a novel pipeline that uses fully convolutional neural networks (FCNNs) to transition from single-species images to segmented multi-species images. This paper demonstrates that FCNNs can effectively generalize learning from single-species canopy images to multi-species canopy images, achieving accurate pixel classification in mixed-species canopies even when the network was trained only on images of single-species canopies. Additionally, we introduce interactive machine learning and pseudo-labelling as a method for generating a single-species canopy training set in a matter of minutes. We also present two software packages that implement our approach and extensively evaluate them against several baselines. Our findings demonstrate that our approach can significantly reduce the human time required for semantic segmentation of multi-species canopy images, achieving over 90% accuracy in less than 10 minutes. This new method has the potential to greatly facilitate the study of multi-species canopies.
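The pseudo-labelling step described here, keeping only confident predictions of a network trained on single-species images, can be sketched as follows; the 0.9 confidence threshold and the ignore label of -1 are illustrative choices, not values from the paper:

```python
import numpy as np

def pseudo_labels(probs: np.ndarray, threshold: float = 0.9) -> np.ndarray:
    """Convert per-pixel class probabilities (H, W, C) from a model trained
    on single-species canopies into pseudo-labels for multi-species images:
    keep the argmax class where confidence is high, mark the rest as ignore."""
    confidence = probs.max(axis=-1)
    labels = probs.argmax(axis=-1).astype(np.int64)
    labels[confidence < threshold] = -1  # excluded from the training loss
    return labels

# One confident pixel (kept as class 0) and one ambiguous pixel (ignored)
probs = np.array([[[0.95, 0.05], [0.60, 0.40]]])
labels = pseudo_labels(probs)
```

Training then proceeds only on the confidently labelled pixels, which is what makes annotation-free dataset growth possible.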

4
High-Throughput Phenotyping of Seed Quality Traits Using Imaging and Deep Learning in Dry Pea

Morales, M.; Worral, H.; Piche, L.; Atanda, S. A.; Dariva, F.; Ramos, C.; Hoang, K.; Yan, C.; Flores, P.; Bandillo, N.

2024-03-06 plant biology 10.1101/2024.03.05.583564 medRxiv
Top 0.1%
41.6%

Seed traits, such as seed color and seed size, directly impact seed quality, affecting the marketability and value of dry peas [1]. Assessing seed quality is integral to plant breeding programs to ensure optimal seed standards. This research introduces a phenotyping tool for assessing seed quality traits specifically tailored to pulse crops, which integrates image processing with cutting-edge deep learning models. The proposed method is designed for automation, seamlessly processing a sequence of images while minimizing human intervention. The pipeline standardizes red-green-blue (RGB) images captured in a color light box and uses deep learning models to segment and detect seed features. Our method extracts up to 86 distinct seed characteristics, ranging from basic size metrics to intricate texture details and color nuances. Compared to traditional methods, our pipeline demonstrated 95 percent similarity in seed quality assessment and greater time efficiency (processing time reduced from 2 weeks to 30 minutes). Specifically, we observed an improvement in the accuracy of seed trait identification by simply using an RGB value instead of a categorical, non-standard description, which allowed for an increase in the range of detectable seed quality characteristics. By integrating conventional image processing techniques with foundational deep learning models, this approach emerges as a pivotal instrument in pulse breeding programs, guaranteeing the maintenance of superior seed quality standards.
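The basic size and colour features in such a pipeline reduce to per-seed aggregations over an instance segmentation mask. A minimal sketch with illustrative feature names (the paper extracts up to 86 characteristics, far beyond these two):

```python
import numpy as np

def seed_traits(image: np.ndarray, mask: np.ndarray) -> dict:
    """Per-seed pixel area and mean RGB from an (H, W, 3) image and an
    (H, W) instance mask with labels 1..N (background 0)."""
    traits = {}
    for label in np.unique(mask):
        if label == 0:
            continue  # skip background
        region = mask == label
        traits[int(label)] = {
            "area_px": int(region.sum()),
            "mean_rgb": image[region].mean(axis=0),
        }
    return traits

# Two seeds in a 2x2 image: seed 1 covers two pixels, seed 2 one pixel
image = np.array([[[10, 20, 30], [30, 40, 50]],
                  [[0, 0, 0], [100, 110, 120]]], dtype=float)
mask = np.array([[1, 1], [0, 2]])
traits = seed_traits(image, mask)
```

Reporting a mean RGB per seed, rather than a categorical colour description, is exactly the standardization the abstract credits with improving trait identification.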

5
Rhizonet: Image Segmentation for Plant Root in Hydroponic Ecosystem

Ushizima, D.; Sordo, Z.; Andeer, P.; Sethian, J.; Northen, T.

2023-11-21 plant biology 10.1101/2023.11.20.565580 medRxiv
Top 0.1%
40.8%

Digital cameras have the ability to capture daily images of plant roots, allowing for the estimation of root biomass. However, the complexities of root structures and noisy image backgrounds pose challenges for advanced phenotyping. Manual segmentation methods are laborious and prone to errors, which hinders experiments involving several plants. This paper introduces Rhizonet, a supervised deep learning approach for semantic segmentation of plant root images. Rhizonet harnesses a Residual U-Net backbone to enhance prediction accuracy, incorporating a convex hull operation to precisely outline the largest connected component. The primary objective is to accurately segment the biomass of the roots and analyze their growth over time. The input data comprises color images of various plant samples within a hydroponic environment known as EcoFAB, subject to specific nutrition treatments. Validation tests demonstrate the robust generalization of the model across experiments. This research pioneers advances in root segmentation and phenotype analysis by standardizing processes and facilitating the analysis of thousands of images while reducing subjectivity. The proposed root segmentation algorithms contribute significantly to the precise assessment of the dynamics of root growth under diverse plant conditions.

6
Cassava Detection from UAV Images Using YOLOv5 Object Detection Model: Towards Weed Control in a Cassava Farm

Nnadozie, E. C.; Iloanusi, O.; Ani, O.; Yu, K.

2022-11-17 bioengineering 10.1101/2022.11.16.516748 medRxiv
Top 0.1%
40.6%

Most deep learning-based weed detection methods either yield high accuracy but are too slow for real-time applications or too computationally intensive for implementation on smaller devices usable on resource-constrained platforms like UAVs; on the other hand, most of the faster methods lack good accuracy. In this work, two versions of the deep learning-based YOLOv5 object detection model, YOLOv5n and YOLOv5s, were evaluated for cassava detection as a step towards real-time weed detection. The performance of the models was compared when trained with different image resolutions. The robustness of the models was also evaluated under varying field conditions such as illumination, weed density, and crop growth stage. YOLOv5s showed the best accuracy, whereas YOLOv5n had the best inference speed. For similar image resolutions, YOLOv5s performed better; however, training YOLOv5n with higher image resolutions could yield better performance than training YOLOv5s with lower image resolutions. Both models were robust to variations in field conditions. A speed-versus-accuracy plot highlighted a range of possible trade-offs to guide real-time deployment of the object detection models for cassava detection.

7
Towards high throughput in-field detection and quantification of wheat foliar diseases with deep learning

Zenkl, R.; McDonald, B. A.; Walter, A.; Anderegg, J.

2024-05-13 plant biology 10.1101/2024.05.10.593608 medRxiv
Top 0.1%
37.0%

Reliable, quantitative information on the presence and severity of crop diseases is critical for site-specific crop management and resistance breeding. Successful analysis of leaves under naturally variable lighting, presenting multiple disorders, and across phenological stages is a critical step towards high-throughput disease assessments directly in the field. Here, we present a dataset comprising 422 high-resolution images of flattened leaves captured under variable outdoor lighting, with polygon annotations of leaves, leaf necrosis, and insect damage, as well as point annotations of Septoria tritici blotch (STB) fruiting bodies (pycnidia) and rust pustules. Based on this dataset, we demonstrate the capability of deep learning for keypoint detection of pycnidia (F1 = 0.76) and rust pustules (F1 = 0.77), combined with semantic segmentation of leaves (IoU = 0.96), leaf necrosis (IoU = 0.77), and insect damage (IoU = 0.69), to reliably detect and quantify the presence of STB, leaf rusts, and insect damage under natural outdoor conditions. An analysis of intra- and inter-annotator agreement on selected images demonstrated that the proposed method achieved a performance close to that of annotators in the majority of scenarios. We validated the generalization capabilities of the proposed method by testing it on images of unstructured canopies acquired directly in the field, without manual interaction with single leaves. The corresponding imaging procedure can be adapted to support automated data acquisition. Model predictions were in good agreement with visual assessments of in-focus regions in these images, despite the presence of new challenges such as variable orientation of leaves and more complex lighting. This underscores the in-principle feasibility of diagnosing and quantifying the severity of foliar diseases under field conditions using the proposed imaging setup and image processing methods.
By demonstrating the ability to diagnose and quantify the severity of multiple diseases in highly natural, complex scenarios, we lay the groundwork for a significantly more efficient, non-invasive in-field analysis of foliar diseases that can support resistance breeding and the implementation of core principles of precision agriculture.
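The segmentation scores quoted above are Intersection-over-Union values; for binary masks the metric is a two-line computation. A minimal sketch (the convention of returning 1.0 for two empty masks is ours, not the paper's):

```python
import numpy as np

def iou(pred: np.ndarray, target: np.ndarray) -> float:
    """Intersection over Union between two binary segmentation masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    union = np.logical_or(pred, target).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return np.logical_and(pred, target).sum() / union

a = np.array([[1, 1, 0], [0, 1, 0]])
b = np.array([[1, 0, 0], [0, 1, 1]])
# intersection = 2 pixels, union = 4 pixels -> IoU = 0.5
```

The same computation, applied per class (leaf, necrosis, insect damage), yields the IoU figures reported in the abstract.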

8
CitriBEiTNet: A Hybrid CNN-Transformer Architecture Combining MobileNetV2 with BEiT's Global Attention for Automated Citrus Leaf Disease Diagnosis

Eman, H.; Shah, S. M. A.; Ahmad, R. W.; Ghaffar, A.; Khan, H. A.

2025-12-12 bioengineering 10.64898/2025.12.09.693306 medRxiv
Top 0.1%
33.8%

Citrus farming plays an essential role in agriculture; however, diseases like canker, greening, black spot, and melanose significantly reduce yield and fruit quality. Efficient classification of citrus leaf diseases is important for crop health maintenance and optimal crop yield. Traditional methods for leaf disease detection are slow, labor-intensive, and often inaccurate, which highlights the need for automated solutions. This research presents a novel hybrid approach for identifying citrus diseases by combining a vision transformer with deep learning architectures. Using Bidirectional Encoder Representation from Image Transformers (BEiT) and MobileNetV2 as feature extractors, the proposed model captures distinctive features from images, which are then classified using a Support Vector Machine (SVM). The dataset includes four disease categories and a healthy class. Data augmentation techniques are applied to improve model robustness. The experimental findings demonstrate that CitriBEiTNet achieves a remarkable training accuracy of 99.82% and a testing accuracy of 99.57%, outperforming current leading techniques. This model provides an efficient, scalable, and economical approach to early disease identification, enabling farmers to take preventive measures and improve agricultural yields.

9
The Global Wheat Full Semantic Organ Segmentation (GWFSS) dataset

Wang, Z.; Zenkl, R.; Greche, L.; De Solan, B.; Bernigaud Samatan, L.; Ouahid, S.; Visioni, A.; Robles-Zazueta, C. A.; Pinto, F.; Perez-Olivera, I.; Reynolds, M. P.; Zhu, C.; Liu, S.; D'argaignon, M.-P.; Lopez-Lozano, R.; Weiss, M.; Marzougui, A.; Roth, L.; Dandrifosse, S.; Carlier, A.; Dumont, B.; Mercatoris, B.; Fernandez, J.; Chapman, S.; Najafian, K.; Stavness, I.; Wang, H.; Guo, W.; Virlet, N.; Hawkesford, M.; Chen, Z.; David, E.; Gillet, J.; Irfan, K.; Comar, A.; Hund, A.

2025-03-19 plant biology 10.1101/2025.03.18.642594 medRxiv
Top 0.1%
33.4%

Computer vision is increasingly used in farmers' fields and agricultural experiments to quantify important traits. Imaging setups with a sub-millimetre ground sampling distance enable the detection and tracking of plant features, including size, shape, and colour. Although today's AI-driven foundation models segment almost any object in an image, they still fail for complex plant canopies. To improve model performance, the global wheat dataset consortium assembled a diverse set of images from experiments around the globe. Following the head detection dataset (GWHD), the new dataset targets full semantic segmentation (GWFSS) of wheat organs (leaves, stems, and spikes) covering all developmental stages. Images were collected by 11 institutions using a wide range of imaging setups. Two datasets are provided: (i) a set of 1,096 diverse images in which all organs were labelled at the pixel level, and (ii) a dataset of 52,078 images without annotations, available for additional training. The labelled set was used to train segmentation models based on DeepLabV3Plus and Segformer. Our Segformer model performed slightly better than DeepLabV3Plus, with a mIoU of ca. 90% for leaves and spikes; however, the precision for stems was considerably lower, at 54%. The major advantages over published models are: (i) the exclusion of weeds from the wheat canopy, and (ii) the detection of all wheat features, including necrotic and senescent tissues, and their separation from crop residues. This facilitates further development in classifying healthy vs. unhealthy tissue to address the increasing need for accurate quantification of senescence and diseases in wheat canopies.

10
RootNet: A Convolutional Neural Networks for Complex Plant Root Phenotyping from High-Definition Datasets

Yasrab, R.; Pound, M.; French, A.; Pridmore, T.

2020-05-02 plant biology 10.1101/2020.05.01.073270 medRxiv
Top 0.1%
33.3%

Plant phenotyping using machine learning and computer vision approaches is a challenging task. Deep learning-based systems for plant phenotyping are more efficient at measuring different plant traits for diverse genetic discoveries than traditional image-based phenotyping approaches. Plant biologists have recently demanded more reliable and accurate image-based phenotyping systems for assessing various features of plants and crops. The core of these image-based phenotyping systems is structural classification and feature segmentation. Deep learning-based systems have shown outstanding results in extracting very complicated features and structures of above-ground plants. Nevertheless, the below-ground part of the plant is usually more complicated to analyze due to its complex arrangement and distorted appearance. We propose a deep convolutional neural network (CNN) model named "RootNet" that detects and pixel-wise segments plant root features. A distinguishing feature of the proposed method is the detection and segmentation of very thin roots (1-3 pixels wide). The proposed approach segments high-definition images without significantly sacrificing pixel density, leading to more accurate root-type detection and segmentation results. It is hard to train CNNs on high-definition images due to GPU memory limitations; the proposed patch-based CNN training setup makes use of the entire image (at maximum pixel density) to recognize and segment the given root system efficiently. We used a wheat (Triticum aestivum L.) seedling dataset consisting of wheat roots grown in visible pouches. The proposed system segments the given root systems and saves them to the Root System Markup Language (RSML) for future analysis. RootNet was trained on this dataset alongside popular semantic segmentation architectures and achieved benchmark accuracy.

11
Recognition and localization of maize leaves in RGB images based on Point-Line Net

Liu, B.; Chang, J.; Hou, D.; Li, D.; Ruan, J.

2024-01-08 plant biology 10.1101/2024.01.08.574560 medRxiv
Top 0.1%
28.5%

Plant phenotype detection plays a crucial role in understanding and studying plant biology, agriculture, and ecology. It involves the quantification and analysis of various physical traits and characteristics of plants, such as plant height, leaf shape, angle, number, and growth trajectory. By accurately detecting and measuring these phenotypic traits, researchers can gain insights into plant growth, development, stress tolerance, and the influence of environmental factors. Among these phenotypic traits, the number of leaves and the growth trajectory of the plant are the most accessible; nonetheless, obtaining this information is labor-intensive and financially demanding. With the rapid development of computer vision technology and artificial intelligence, using maize field images to fully analyze plant-related information such as growth trajectory and number of leaves can greatly eliminate repetitive labor and enhance the efficiency of plant breeding. However, the application of deep learning methods still faces challenges due to severe occlusion and the complex backgrounds of field plant images. In this study, we developed a deep learning method called Point-Line Net, based on the Mask R-CNN framework, to automatically recognize maize field images and determine the number and growth trajectory of leaves and roots. The experimental results demonstrate that the object detection accuracy (mAP) of our Point-Line Net reaches 81.5%. Moreover, to describe the position and growth of leaves and roots, we introduced a new lightweight "keypoint" detection branch that achieved 33.5 on our custom distance verification index. Overall, these findings provide valuable insights for future field plant phenotype detection, particularly for datasets with dot and line annotations.

12
Automatic Traits Extraction and Fitting for Field High-throughput Phenotyping Systems

Guo, X.; Qiu, Y.; Nettleton, D.; Yeh, C.-T.; Zheng, Z.; Hey, S.; Schnable, P. S.

2020-09-10 plant biology 10.1101/2020.09.09.289769 medRxiv
Top 0.1%
28.2%

High-throughput phenotyping is a modern technology for measuring plant traits efficiently and at large scale with imaging systems over the whole growth season. These images provide rich data for statistical analysis of plant phenotypes. We propose a pipeline to extract and analyze plant traits for field phenotyping systems. The proposed pipeline includes the following main steps: plant segmentation from field images, automatic calculation of plant traits from the segmented images, and functional curve fitting for the extracted traits. To deal with the challenging problem of plant segmentation in field images, we propose a novel approach to image pixel classification based on transform-domain neural network models, which uses plant pixels from greenhouse images to train a segmentation model for field images. Our results show that the proposed procedure accurately extracts plant heights and is more stable than results from Amazon Mechanical Turk workers, who manually measure plant heights from the original images.

13
AGIcam: An open-source IoT-based camera system for automated in-field phenotyping and yield prediction

Sangjan, W.; Pukrongta, N.; Buchanan, T.; Carter, A. H.; Pumphrey, M. O.; Sankaran, S.

2026-01-13 plant biology 10.64898/2026.01.13.699185 medRxiv
Top 0.1%
28.1%

Continuous, high-frequency monitoring is essential to capture rapid phenological transitions and dynamic crop responses to the environment. However, most phenotyping platforms lack the temporal resolution and automation required for consistent, season-long trait assessment. This study introduces AGIcam, an open-source IoT camera system for automated and continuous in-field plant phenotyping and yield prediction. The platform integrates solar-powered Raspberry Pi units with a modular software stack, comprising Node-RED, InfluxDB, Grafana, and Microsoft Azure, for automated data acquisition, transfer, and visualization. In the 2022 growing season, 18 AGIcam systems were deployed in spring and winter wheat breeding trials, maintaining an uptime of over 85% while capturing frequent RGB and NoIR imagery. Time-series vegetation indices derived from these images were used to predict yield using random forest and Long Short-Term Memory (LSTM) models. The LSTM approach achieved the highest accuracy approximately one week after heading, with mean prediction errors of 3.41% for spring wheat and 1.62% for winter wheat. These results highlight the potential of IoT-based platforms such as AGIcam to enable real-time, scalable, and effective phenotyping solutions for data-driven crop improvement.
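The paired RGB and NoIR captures make standard vegetation indices such as NDVI straightforward to derive per image; the abstract does not list the exact index set used, so NDVI here is an assumed example:

```python
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """Per-pixel Normalized Difference Vegetation Index,
    (NIR - Red) / (NIR + Red), with zero-denominator pixels set to 0."""
    nir, red = nir.astype(float), red.astype(float)
    denom = nir + red
    out = np.zeros_like(denom)
    np.divide(nir - red, denom, out=out, where=denom != 0)
    return out

# A plot-level time series is then the mean NDVI of each capture date,
# which is the kind of feature fed into the random forest and LSTM models
nir = np.array([[0.6, 0.8], [0.7, 0.5]])
red = np.array([[0.2, 0.2], [0.3, 0.5]])
values = ndvi(nir, red)
```

Stacking these per-date means over the season gives the time series the yield models consume.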

14
Coupled functional physiological phenotyping and simulation model to estimate dynamic water use efficiency and infer transpiration sensitivity traits

Sun, T.; Cheng, R.; Sun, Y.; Jiang, R.; Wang, Z.; Fang, P.; Wu, X.; Ning, K.; Xu, P.

2022-12-05 physiology 10.1101/2022.10.10.511465 medRxiv
Top 0.1%
26.3%

As agricultural drought becomes more frequent worldwide, it is essential to improve crop productivity while reducing water consumption to achieve sustainable production. Plant transpiration rate and water use efficiency (WUE) collectively determine yield performance, yet it is challenging to balance the two in breeding programs due to the still insufficient mechanistic understanding of these traits. Here we demonstrate the feasibility and effectiveness of calculating dynamic, momentary WUE by coupling a WUE model with state-of-the-art functional physiological phenotyping (FPP). We also present a method for quantifying genotype-specific traits reflecting the sensitivity of transpiration to radiation (STr-Rad) and to vapor pressure deficit (STr-VPD) under evolving developmental stages and water availability. Using these methods, we revealed genotypic differences in STr-Rad and STr-VPD among three watermelon accessions, the dramatic change in each across the drought treatment phases, and their quantitative impacts on dynamic WUE patterns. Based on our results and computational simulations, we propose a general principle for transpiration ideotype design, which highlights the benefits of lowering STr-VPD to increase WUE and of increasing STr-Rad to offset the decline of Tr. FPP-enabled phenomic selection will help screen for elite crop lines with the desired transpiration sensitivities.
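Momentary WUE itself is a simple ratio of carbon or biomass gain to water transpired over the same interval; the sketch below uses this generic definition and omits the radiation and VPD sensitivity terms of the paper's coupled model:

```python
import numpy as np

def momentary_wue(gain, transpiration) -> np.ndarray:
    """Dynamic WUE series: biomass (or carbon) gain per unit water
    transpired at each time step; intervals with no transpiration map to 0."""
    gain = np.asarray(gain, dtype=float)
    transpiration = np.asarray(transpiration, dtype=float)
    out = np.zeros_like(gain)
    np.divide(gain, transpiration, out=out, where=transpiration != 0)
    return out

# Three time steps; the step with no transpiration is defined as 0
wue = momentary_wue([2.0, 3.0, 0.5], [4.0, 2.0, 0.0])
```

Genotype-specific sensitivities (STr-Rad, STr-VPD) would then be fitted as response slopes of the transpiration series against radiation and VPD, which is beyond this sketch.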

15
Botanic Spectrum Analyser: A Deep Learning GUI for Plant Image Segmentation in Hyperspectral and RGB Phenotyping

Walsh, J. J.; Gorgu, L.; Cavel, E.; Poulain, V.; Gutierrez, L.; Mangina, E.; Negrao, S.

2025-09-17 plant biology 10.1101/2025.09.14.676080 medRxiv
Top 0.1%
25.8%

Plant phenotyping systematically quantifies plant traits such as growth, morphology, physiology, or yield, assessing genetic and environmental influences on plant performance. The integration of advanced phenotyping technologies, including imaging sensors and data analytics, facilitates the non-destructive and longitudinal acquisition of high-throughput data. Nevertheless, the sheer volume of such phenotyping data introduces significant challenges for researchers, particularly related to data processing. To overcome these challenges, researchers are turning to artificial intelligence (AI), a tool that can autonomously process and learn from large amounts of data. Despite this advantage, accurate image segmentation remains a key hurdle due to the complexity of plant morphology and environmental noise. In this study, we present the Botanical Spectrum Analyser (BSA), a user-friendly graphical user interface (GUI) that integrates a modified U-Net deep neural network for plant image segmentation. Designed for accessibility, BSA enables non-technical users to apply advanced AI segmentation to RGB and hyperspectral (VNIR and SWIR) imagery. We evaluated BSA's performance across three case studies involving wheat, barley, and Arabidopsis, demonstrating its robustness across species and imaging modalities. Our results show that BSA achieves an average accuracy of 99.7%, with F1-scores consistently exceeding 98% and strong Jaccard and recall performance across datasets. For challenging root segmentation tasks, BSA outperformed commercial algorithms, achieving a 76% F1-score compared to 24%, a 52-percentage-point improvement. These results highlight the adaptability of the BSA framework for diverse phenotyping scenarios, bridging the gap between advanced deep learning methods and accessible plant science applications.

16
An end-to-end workflow based on multimodal 3D imaging and machine learning for non-destructive diagnosis of grapevine trunk diseases

Fernandez, R.; le cunff, l.; Merigeaud, S.; Verdeil, J.-L.; Perry, J.; Larignon, P.; Spilmont, A.-S.; Chatelet, P.; Cardoso, M.; Goze-Bac, C.; Moisy, C.

2022-06-10 plant biology 10.1101/2022.06.09.495457 medRxiv
Top 0.1%
23.0%

Quantifying healthy and degraded inner tissues in plants is of great interest in agronomy, for example, to assess plant health and quality and to monitor physiological traits or diseases. However, detecting functional and degraded plant tissues in vivo without harming the plant is extremely challenging. New solutions are needed for ligneous and perennial species, for which the sustainability of plantations is crucial. To tackle this challenge, we developed a novel approach based on multimodal 3D imaging and Artificial Intelligence (AI)-based image processing that allows a noninvasive diagnosis of inner tissues in living plants. The method was successfully applied to grapevine (Vitis vinifera L.) in vineyards where sustainability is threatened by trunk diseases and the sanitary status of vines cannot be ascertained without injuring the plants. By combining MRI and X-ray CT 3D imaging with automatic voxel classification, we could discriminate intact, degraded, and white-rot tissues with a mean global accuracy of over 91%. Each imaging modality's contribution to tissue detection was evaluated, and we identified quantitative structural and physiological markers characterizing the steps of wood degradation. The combined study of inner tissue distribution versus external foliar symptom history demonstrated that white-rot and intact tissue contents are key measurements in evaluating vines' sanitary status. We finally propose a model for accurate trunk disease diagnosis in grapevine. This work opens new routes for precision agriculture and in-situ monitoring of wood quality and plant health across plant species.

17
A comparative study of plant phenotyping workflows based on three-dimensional reconstruction from multi-view images

Someno, D.; Noshita, K.

2024-03-25 plant biology 10.1101/2024.03.21.586185 medRxiv
Top 0.1%
22.8%

With the world facing escalating food demand, limited agricultural land, and environmental change, there is a growing need for data-driven sustainable agricultural management. Advances in sequencing and sensor networks have reduced costs of acquiring genomic and environmental data; however, collecting phenotypic data, crucial for monitoring plant growth and detecting pests and diseases, remains labor-intensive. Technological advances have enabled efficient collection of three-dimensional (3D) data, yet this process currently involves intricate steps. Therefore, developing effective phenotyping methods is essential. In this study, we developed a phenotyping process based on 3D data, including mask image generation using deep neural network models, 3D reconstruction using the Structure from Motion/Multi-View Stereo (SfM/MVS) pipeline, and surface reconstruction for leaf area estimation. Using soybean datasets, we found that a 1/5.4x magnification effectively generated mask images. Among four mask image usage scenarios in SfM/MVS, applying soybean-and-stage masks before SfM and only soybean masks after SfM yielded the highest-quality point cloud data with the second shortest processing time. Finally, we compared Poisson reconstruction and B-spline surface fitting in leaf area estimation; B-spline fitting showed greater correlation with destructive measurements. We propose an optimal workflow for estimating leaf area and provide tools and datasets for future phenotyping research.

18
Decrypting the complex phenotyping traits of plants by machine learning

Zdrazil, J.; Kong, L.; Klimes, P.; Jasso-Robles, F. I.; Saiz-Fernandez, I.; Guder, F.; Spichal, L.; Snasel, V.; De Diego, N.

2024-11-15 plant biology 10.1101/2024.11.14.623623 medRxiv
Top 0.1%
22.6%

Phenotypes, defining an organism's behaviour and physical attributes, arise from the complex, dynamic interplay of genetics, development, and environment, whose interactions make it enormously challenging to forecast the future phenotypic traits of a plant at a given moment. This work reports AMULET, a modular approach that uses imaging-based high-throughput phenotyping and machine learning to predict morphological and physiological plant traits hours to days before they are visible. AMULET streamlines the phenotyping process by integrating plant detection, prediction, segmentation, and data analysis, enhancing workflow efficiency and reducing time. The machine learning models were trained on data from over 30,000 plants from the Arabidopsis thaliana-Pseudomonas syringae pathosystem. AMULET also demonstrated its adaptability by accurately detecting and predicting phenotypes of in vitro potato plants after minimal fine-tuning with a small dataset. The general approach implemented through AMULET streamlines phenotyping and will improve breeding programs and agricultural management by enabling pre-emptive interventions that optimise plant health and productivity.
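The forecasting idea behind a pipeline like AMULET can be sketched as a regression from early time-point features to a trait value hours later. This is not the authors' model; the features, the simple linear regressor, and the simulated growth relation are all assumptions for illustration.

```python
# Hedged sketch: predict a future trait from early-time-point features
# (simulated data; feature names and relation are invented).
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n = 200
area_t0 = rng.uniform(10, 50, n)      # projected rosette area at t0
growth = rng.uniform(0.5, 1.5, n)     # per-day relative growth rate
# Assume area 24 h later follows a simple known relation plus noise.
area_t24 = area_t0 * (1 + growth) + rng.normal(0, 1.0, n)

X = np.column_stack([area_t0, growth, area_t0 * growth])
model = LinearRegression().fit(X, area_t24)
r2 = model.score(X, area_t24)
print(f"R^2 on training data: {r2:.3f}")
```

Because the interaction term `area_t0 * growth` is included as a feature, the linear model recovers the simulated relation almost exactly; real trait forecasting would of course need held-out validation and richer imaging features.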

19
Learning from Synthetic Dataset for Crop Seed Instance Segmentation

Toda, Y.; Okura, F.; Ito, J.; Okada, S.; Kinoshita, T.; Tsuji, H.; Saisho, D.

2019-12-06 bioinformatics 10.1101/866921 medRxiv
Top 0.1%
22.5%

Incorporating deep learning in the image analysis pipeline has opened the possibility of introducing precision phenotyping in the field of agriculture. However, to train the neural network, a sufficient amount of training data must be prepared, which requires a time-consuming manual data annotation process that often becomes the limiting step. Here, we show that an instance segmentation neural network (Mask R-CNN) aimed at phenotyping the seed morphology of various barley cultivars can be sufficiently trained purely on a synthetically generated dataset. Our approach is based on the concept of domain randomization, in which a large number of images is generated by randomly orienting seed objects on a virtual canvas. After training on such a dataset, recall and average precision on a real-world test dataset reached 96% and 95%, respectively. Applying our pipeline enables extraction of morphological parameters at a large scale, enabling precise characterization of the natural variation of barley from a multivariate perspective. Importantly, we show that our approach is effective not only for barley seeds but also for various crops including rice, lettuce, oat, and wheat, indicating that the performance benefits of this technique are generic. We propose that constructing and utilizing such synthetic data can be a powerful way to reduce the human labor needed to prepare training datasets for deep learning in the agricultural domain.
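The domain-randomization idea above can be sketched in a toy form: paste randomly placed "seed" silhouettes onto a blank canvas while recording a per-instance mask, so segmentation labels come for free. This is a simplification of the paper's generator; here a circle stands in for a rotated seed crop, and all sizes are illustrative.

```python
# Toy sketch of synthetic-canvas generation for instance segmentation.
# Real domain randomization would paste rotated real seed crops with
# randomized backgrounds; circles are used here for brevity.
import numpy as np

rng = np.random.default_rng(42)

def make_canvas(n_seeds: int, size: int = 64, seed_r: int = 4):
    canvas = np.zeros((size, size), dtype=np.uint8)
    masks = []
    yy, xx = np.mgrid[:size, :size]
    for _ in range(n_seeds):
        cy, cx = rng.integers(seed_r, size - seed_r, 2)  # random placement
        m = (yy - cy) ** 2 + (xx - cx) ** 2 <= seed_r ** 2
        canvas[m] = 255          # draw the "seed"
        masks.append(m)          # ground-truth instance mask, for free
    return canvas, masks

canvas, masks = make_canvas(5)
print(canvas.shape, len(masks))
```

Each generated image ships with exact instance masks, which is precisely what makes synthetic data attractive for training Mask R-CNN without manual annotation.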

20
The shape and volume of air, kernels, and cracks, in a nutshell

Amezquita, E. J.; Quigley, M. Y.; Brown, P. J.; Munch, E.; Chitwood, D. H.

2023-09-28 plant biology 10.1101/2023.09.26.559651 medRxiv
Top 0.1%
22.4%

Walnuts are the second most produced and consumed tree nut, with over 2.6 million metric tons produced in the 2022-23 harvest cycle alone. The United States is the second largest producer, accounting for 25% of the total global supply. Nonetheless, producers face ever-growing demand in a more uncertain climate landscape, which requires effective and efficient walnut selection and breeding of new cultivars with increased kernel content and easy-to-open shells. Past and current efforts select for these traits using hand-held calipers and eye-based evaluations. Yet there is plenty of morphology that meets the eye but goes unmeasured, such as the volume of inner air or the convexity of the kernel. Here, we study the shape of walnut fruits based on X-ray CT (Computed Tomography) 3D reconstructions. We compute 49 different morphological phenotypes for 1264 individuals comprising 149 accessions. These phenotypes are complemented by traits of breeding interest such as ease of kernel removal and kernel weight. Through allometric relationships (the relative growth of one tissue with respect to another), we identify possible biophysical constraints at play during development. We explore multiple correlations between all morphological and commercial traits, and identify which morphological traits explain the most variability in commercial traits. We show that using only volume- and thickness-based traits, especially inner air content, we can successfully encode several of the commercial traits.

Core Ideas:
- X-ray Computed Tomography (CT) imaging is used to compute a broad array of morphological phenotypes in walnuts.
- These morphological traits suggest biophysical constraints at play during walnut development.
- Relative inner air, shell, and packing tissue volumes are significantly correlated with the rest of the shape phenotypes.
- These volumes produce the best prediction models for traits of commercial interest such as shell strength.
- Inexpensive phenotyping platforms that focus solely on volume measurement would enable better walnut breeding.
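The allometric analysis mentioned above is conventionally done by fitting a power law y = a * x^b via linear regression in log-log space, where the exponent b indicates whether one tissue grows faster or slower than another. The sketch below uses simulated volumes; the variable names and the assumed exponent are illustrative, not the paper's estimates.

```python
# Sketch of allometric fitting: y = a * x^b, estimated in log-log space.
# Simulated data; "kernel vs. whole nut volume" is an invented example pair.
import numpy as np

rng = np.random.default_rng(7)
n = 300
nut_volume = rng.uniform(10, 40, n)                    # simulated, cm^3
true_a, true_b = 0.3, 1.2                              # assumed allometry
kernel_volume = true_a * nut_volume ** true_b * rng.lognormal(0, 0.05, n)

# Ordinary least squares on log-transformed values: log y = b*log x + log a.
b_hat, log_a_hat = np.polyfit(np.log(nut_volume), np.log(kernel_volume), 1)
a_hat = np.exp(log_a_hat)
print(f"estimated exponent b = {b_hat:.2f}, prefactor a = {a_hat:.2f}")
```

An exponent b > 1 (hyperallometry) would mean the kernel claims a growing share of the nut as fruits get larger; deviations of b from 1 are the kind of signal the abstract interprets as a biophysical constraint on development.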